8 research outputs found

    Towards spatial and temporal analysis of facial expressions in 3D data

    Facial expressions are one of the most important means for communication of emotions and meaning. They are used to clarify and give emphasis, to express intentions, and form a crucial part of any human interaction. The ability to automatically recognise and analyse expressions could therefore prove vital to human behaviour understanding, with applications in areas such as psychology, medicine and security. 3D and 4D (3D+time) facial expression analysis is an expanding field, providing the ability to deal with problems inherent to 2D images, such as out-of-plane motion, head pose, and illumination issues. Analysis of data of this kind requires extending successful approaches applied to the 2D problem, as well as the development of new techniques. The recent introduction of new databases containing appropriate expression data, recorded in 3D or 4D, has allowed research into this exciting area for the first time. This thesis develops a number of techniques, both in 2D and 3D, that build towards a complete system for analysis of 4D expressions. Suitable feature types, designed by employing binary pattern methods, are developed for analysis of 3D facial geometry data. The full dynamics of 4D expressions are modelled, through a system reliant on motion-based features, to demonstrate how the different components of the expression (neutral-onset-apex-offset) can be distinguished and harnessed. Further, the spatial structure of expressions is harnessed to improve expression component intensity estimation in 2D videos. Finally, it is discussed how this latter step could be extended to 3D facial expression analysis, and also combined with temporal analysis. Thus, it is demonstrated that both spatial and temporal information, when combined with appropriate 3D features, is critical in the analysis of 4D expression data.
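    To make the "binary pattern" feature idea concrete, the following is a minimal sketch of classic local binary pattern (LBP) codes applied to a facial depth image. This is an illustration of the general binary-pattern family the abstract refers to, not the thesis's actual 3D descriptor; the function names and the 256-bin histogram choice are assumptions.

    ```python
    def lbp_code(depth, y, x):
        # 8-neighbour local binary pattern code at interior pixel (y, x):
        # each neighbour contributes one bit, set when that neighbour's
        # depth value is at least the centre pixel's value.
        centre = depth[y][x]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
        code = 0
        for bit, (dy, dx) in enumerate(offsets):
            if depth[y + dy][x + dx] >= centre:
                code |= 1 << bit
        return code

    def lbp_histogram(depth):
        # Normalised histogram of LBP codes over all interior pixels,
        # used as a fixed-length (256-bin) feature vector.
        hist = [0] * 256
        for y in range(1, len(depth) - 1):
            for x in range(1, len(depth[0]) - 1):
                hist[lbp_code(depth, y, x)] += 1
        total = sum(hist) or 1
        return [h / total for h in hist]
    ```

    The appeal of such features is that the histogram is invariant to the image location of each pattern, which gives some robustness to small alignment errors when comparing faces.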

    A Dynamic Approach to the Recognition of 3D Facial Expressions and Their Temporal Models

    In this paper we propose a method that exploits 3D motion-based features between frames of 3D facial geometry sequences for dynamic facial expression recognition. An expressive sequence is modeled to contain an onset followed by an apex and an offset. Feature selection methods are applied in order to extract features for each of the onset and offset segments of the expression. These features are then used to train a Hidden Markov Model in order to model the full temporal dynamics of the expression. The proposed fully automatic system was tested on a subset of the BU-4DFE database for the recognition of happiness, anger and surprise. A comparison with a similar system based on the motion extracted from facial intensity images was also performed. The attained results suggest that the use of the 3D information does indeed improve the recognition accuracy when compared to the 2D data.
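    The temporal model the abstract describes can be sketched as a small left-to-right Hidden Markov Model whose states are the expression phases, decoded with the Viterbi algorithm. All probabilities below are invented for illustration; the paper's system learns its parameters from training sequences and uses richer motion features than a single quantised "low"/"high" symbol per frame.

    ```python
    # States of an expressive sequence, matching the neutral-onset-apex-offset model.
    STATES = ["neutral", "onset", "apex", "offset"]

    # Illustrative left-to-right transition structure; these numbers are invented.
    START = {"neutral": 1.0, "onset": 0.0, "apex": 0.0, "offset": 0.0}
    TRANS = {
        "neutral": {"neutral": 0.8, "onset": 0.2, "apex": 0.0, "offset": 0.0},
        "onset":   {"neutral": 0.0, "onset": 0.7, "apex": 0.3, "offset": 0.0},
        "apex":    {"neutral": 0.0, "onset": 0.0, "apex": 0.8, "offset": 0.2},
        "offset":  {"neutral": 0.1, "onset": 0.0, "apex": 0.0, "offset": 0.9},
    }
    # Emissions over quantised per-frame motion energy: onset and offset
    # are the high-motion phases, neutral and apex are nearly static.
    EMIT = {
        "neutral": {"low": 0.9, "high": 0.1},
        "onset":   {"low": 0.2, "high": 0.8},
        "apex":    {"low": 0.8, "high": 0.2},
        "offset":  {"low": 0.2, "high": 0.8},
    }

    def viterbi(obs):
        # Most likely state sequence for a series of motion observations.
        probs = {s: START[s] * EMIT[s][obs[0]] for s in STATES}
        backptrs = []
        for o in obs[1:]:
            nxt, ptr = {}, {}
            for s in STATES:
                prev = max(STATES, key=lambda r: probs[r] * TRANS[r][s])
                ptr[s] = prev
                nxt[s] = probs[prev] * TRANS[prev][s] * EMIT[s][o]
            probs = nxt
            backptrs.append(ptr)
        path = [max(STATES, key=lambda s: probs[s])]
        for ptr in reversed(backptrs):
            path.append(ptr[path[-1]])
        return path[::-1]
    ```

    For example, `viterbi(["low", "high", "high", "low"])` segments the sequence as neutral, onset, onset, apex: the burst of motion is attributed to the onset and the final static frame to the expression apex.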

    Static and Dynamic 3D Facial Expression Recognition: A Comprehensive Survey

    Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose and illumination variations. In order to deal with these problems, 3D and 4D (dynamic 3D) recordings are increasingly used in expression analysis research. In this paper we survey the recent advances in 3D and 4D facial expression recognition. We discuss developments in 3D facial data acquisition and tracking, present currently available 3D/4D face databases suitable for 3D/4D facial expression analysis, and examine in detail the existing facial expression recognition systems that exploit either 3D or 4D data. Finally, challenges that have to be addressed if 3D facial expression recognition systems are to become a part of future applications are extensively discussed.

    Recognition of 3D facial expression dynamics

    In this paper we propose a method that exploits 3D motion-based features between frames of 3D facial geometry sequences for dynamic facial expression recognition. An expressive sequence is modelled to contain an onset followed by an apex and an offset. Feature selection methods are applied in order to extract features for each of the onset and offset segments of the expression. These features are then used to train GentleBoost classifiers and build a Hidden Markov Model in order to model the full temporal dynamics of the expression. The proposed fully automatic system was employed on the BU-4DFE database for distinguishing between the six universal expressions: Happy, Sad, Angry, Disgust, Surprise and Fear. A comparison with a similar 2D system based on the motion extracted from facial intensity images was also performed. The attained results suggest that, in a fully automatic setting, the use of the 3D information does indeed improve the recognition accuracy when compared to the 2D data.
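    The GentleBoost classifiers mentioned here fit, in each round, a weighted least-squares regression stump to the labels and add it to the ensemble. Below is a minimal self-contained sketch of that scheme on toy data; the function names and the plain exhaustive stump search are assumptions, and the paper's classifiers operate on the selected motion features rather than raw values.

    ```python
    import math

    def fit_stump(X, y, w):
        # Weighted least-squares regression stump: choose the feature and
        # threshold minimising weighted squared error, predicting the
        # weighted mean label on each side of the split.
        best = None
        for j in range(len(X[0])):
            for thr in sorted({x[j] for x in X}):
                left = [i for i in range(len(X)) if X[i][j] < thr]
                right = [i for i in range(len(X)) if X[i][j] >= thr]
                def wmean(idx):
                    total = sum(w[i] for i in idx)
                    return sum(w[i] * y[i] for i in idx) / total if total else 0.0
                a, b = wmean(left), wmean(right)
                lset = set(left)
                err = sum(w[i] * (y[i] - (a if i in lset else b)) ** 2
                          for i in range(len(X)))
                if best is None or err < best[0]:
                    best = (err, j, thr, a, b)
        _, j, thr, a, b = best
        return lambda x: a if x[j] < thr else b

    def gentleboost(X, y, rounds=10):
        # GentleBoost: each round fits a stump to the current sample
        # weights, adds it to the ensemble, and re-weights samples by
        # exp(-y * f(x)) so misclassified samples gain influence.
        n = len(X)
        w = [1.0 / n] * n
        stumps = []
        for _ in range(rounds):
            f = fit_stump(X, y, w)
            stumps.append(f)
            w = [wi * math.exp(-yi * f(xi)) for wi, yi, xi in zip(w, y, X)]
            s = sum(w)
            w = [wi / s for wi in w]
        return lambda x: sum(f(x) for f in stumps)  # sign gives the class
    ```

    GentleBoost differs from discrete AdaBoost in that each weak learner outputs a real value (the fitted mean) rather than a hard label, which tends to make the ensemble better behaved on noisy data.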

    Opportunistic infections and AIDS malignancies early after initiating combination antiretroviral therapy in high-income countries

    Background: There is little information on the incidence of AIDS-defining events which have been reported in the literature to be associated with immune reconstitution inflammatory syndrome (IRIS) after combined antiretroviral therapy (cART) initiation. These events include tuberculosis, mycobacterium avium complex (MAC), cytomegalovirus (CMV) retinitis, progressive multifocal leukoencephalopathy (PML), herpes simplex virus (HSV), Kaposi sarcoma, non-Hodgkin lymphoma (NHL), cryptococcosis and candidiasis. Methods: We identified individuals in the HIV-CAUSAL Collaboration, which includes data from six European countries and the US, who were HIV-positive between 1996 and 2013, antiretroviral therapy naive, aged at least 18 years, had CD4+ cell count and HIV-RNA measurements, and had been AIDS-free for at least 1 month between those measurements and the start of follow-up. For each AIDS-defining event, we estimated the hazard ratio for no cART versus less than 3 and at least 3 months since cART initiation, adjusting for time-varying CD4+ cell count and HIV-RNA via inverse probability weighting. Results: Out of 96 562 eligible individuals (78% men) with a median (interquartile range) follow-up of 31 (13-65) months, 55 144 initiated cART. The number of cases varied between 898 for tuberculosis and 113 for PML. Compared with non-cART initiation, the hazard ratios (95% confidence intervals) up to 3 months after cART initiation were 1.21 (0.90-1.63) for tuberculosis, 2.61 (1.05-6.49) for MAC, 1.17 (0.34-4.08) for CMV retinitis, 1.18 (0.62-2.26) for PML, 1.21 (0.83-1.75) for HSV, 1.18 (0.87-1.58) for Kaposi sarcoma, 1.56 (0.82-2.95) for NHL, 1.11 (0.56-2.18) for cryptococcosis and 0.77 (0.40-1.49) for candidiasis. Conclusion: With the potential exception of mycobacterial infections, unmasking IRIS does not appear to be a common complication of cART initiation in high-income countries.
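    The inverse probability weighting used in this analysis can be illustrated with a toy point-exposure version: each subject is weighted by the inverse of the probability of the treatment they actually received, given their covariates, so that the weighted population behaves as if treatment were unconfounded. This sketch compares simple weighted risks between arms; the record layout and function names are assumptions, and the paper's actual analysis handles time-varying covariates and hazard ratios, which this does not.

    ```python
    def ipw_risks(records, p_treat):
        # records: list of (treated, event, covariate) tuples, with
        # treated/event coded 0 or 1.
        # p_treat(covariate): estimated probability of treatment given
        # the covariate (the propensity score), supplied by the caller.
        num_t = den_t = num_u = den_u = 0.0
        for treated, event, cov in records:
            p = p_treat(cov)
            if treated:
                w = 1.0 / p          # up-weight treated subjects who were unlikely to be treated
                num_t += w * event
                den_t += w
            else:
                w = 1.0 / (1.0 - p)  # up-weight untreated subjects who were likely to be treated
                num_u += w * event
                den_u += w
        return num_t / den_t, num_u / den_u  # weighted risks (treated, untreated)
    ```

    In a confounded sample where, say, sicker subjects are both more likely to be treated and more likely to have the event, the naive risks differ between arms even when treatment has no effect, while the weighted risks coincide.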